@sibucan commented May 4, 2022

This serves as a benchmark to compare pod-to-pod (p2p) network performance results against the performance when the CNI is not involved, and to better understand the overhead of any CNI on a given cluster.

A sample run on my cluster:

```
$ kubectl --kubeconfig=k.yaml get pods

NAME                       READY   STATUS      RESTARTS   AGE
knb-client-idle-55606      0/1     Completed   0          3m27s
knb-client-tcp-n2n-55606   0/1     Completed   0          2m57s
knb-client-tcp-p2p-55606   0/1     Completed   0          116s
knb-client-tcp-p2s-55606   0/1     Completed   0          55s
knb-client-udp-n2n-55606   0/1     Completed   0          2m26s
knb-client-udp-p2p-55606   0/1     Completed   0          86s
knb-client-udp-p2s-55606   0/1     Completed   0          24s
knb-monitor-client-55606   1/1     Running     0          4m2s
knb-monitor-server-55606   1/1     Running     0          4m8s
knb-n2n-server-55606       1/1     Running     0          3m45s
knb-p2p-server-55606       1/1     Running     0          3m52s
```

And the results (certain parts snipped for privacy):

```
=========================================================
 Benchmark Results
=========================================================
 Name            : knb-55606
 Date            : 2022-05-04 20:19:43 UTC
 Generator       : knb
 Version         : 1.5.0
 Server          : node-1
 Client          : node-2
 UDP Socket size : auto
=========================================================
  Discovered CPU         : <SNIPPED>
  Discovered Kernel      : <SNIPPED>
  Discovered k8s version : v1.22.9
  Discovered MTU         : 1480
  Idle :
    bandwidth = 0 Mbit/s
    client cpu = total 8.88% (user 6.32%, nice 0.00%, system 2.47%, iowait 0.09%, steal 0.00%)
    server cpu = total 1.72% (user 1.06%, nice 0.00%, system 0.63%, iowait 0.00%, steal 0.03%)
    client ram = 676 MB
    server ram = 993 MB
  Node to node :
    TCP :
      bandwidth = 2003 Mbit/s
      client cpu = total 15.92% (user 5.87%, nice 0.00%, system 10.05%, iowait 0.00%, steal 0.00%)
      server cpu = total 2.82% (user 1.05%, nice 0.00%, system 1.72%, iowait 0.00%, steal 0.05%)
      client ram = 680 MB
      server ram = 1002 MB
    UDP :
      bandwidth = 1097 Mbit/s
      client cpu = total 75.94% (user 8.94%, nice 0.00%, system 67.00%, iowait 0.00%, steal 0.00%)
      server cpu = total 7.35% (user 1.87%, nice 0.00%, system 5.41%, iowait 0.00%, steal 0.07%)
      client ram = 678 MB
      server ram = 1003 MB
  Pod to pod :
    TCP :
      bandwidth = 1577 Mbit/s
      client cpu = total 29.21% (user 7.94%, nice 0.00%, system 21.07%, iowait 0.10%, steal 0.10%)
      server cpu = total 3.32% (user 1.17%, nice 0.00%, system 2.08%, iowait 0.02%, steal 0.05%)
      client ram = 684 MB
      server ram = 1000 MB
    UDP :
      bandwidth = 510 Mbit/s
      client cpu = total 90.92% (user 7.43%, nice 0.00%, system 83.40%, iowait 0.09%, steal 0.00%)
      server cpu = total 3.72% (user 1.24%, nice 0.00%, system 2.44%, iowait 0.02%, steal 0.02%)
      client ram = 681 MB
      server ram = 1004 MB
  Pod to Service :
    TCP :
      bandwidth = 2028 Mbit/s
      client cpu = total 26.83% (user 5.45%, nice 0.00%, system 21.38%, iowait 0.00%, steal 0.00%)
      server cpu = total 3.48% (user 1.03%, nice 0.00%, system 2.38%, iowait 0.02%, steal 0.05%)
      client ram = 692 MB
      server ram = 1002 MB
    UDP :
      bandwidth = 491 Mbit/s
      client cpu = total 97.90% (user 7.57%, nice 0.00%, system 90.23%, iowait 0.10%, steal 0.00%)
      server cpu = total 3.14% (user 1.16%, nice 0.00%, system 1.92%, iowait 0.00%, steal 0.06%)
      client ram = 698 MB
      server ram = 1000 MB
=========================================================
```
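From these numbers, the CNI overhead can be estimated by treating the node-to-node (host network) run as the baseline and comparing pod-to-pod against it. A small sketch of that arithmetic, using the bandwidth figures from the results above:

```python
# Estimate CNI overhead from the knb results above:
# node-to-node (hostNetwork) is the baseline, pod-to-pod traverses the CNI.
def overhead_pct(baseline_mbit: float, cni_mbit: float) -> float:
    """Bandwidth lost on the CNI path, as a percentage of the baseline."""
    return (baseline_mbit - cni_mbit) / baseline_mbit * 100

tcp = overhead_pct(2003, 1577)  # node-to-node vs pod-to-pod, TCP
udp = overhead_pct(1097, 510)   # node-to-node vs pod-to-pod, UDP
print(f"TCP overhead: {tcp:.1f}%")  # ≈ 21.3% on this cluster
print(f"UDP overhead: {udp:.1f}%")  # ≈ 53.5% on this cluster
```

The sizable UDP gap lines up with the client CPU numbers: the pod-to-pod UDP client spends over 83% of CPU in system time, suggesting the per-packet cost of the CNI path dominates.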

The node-to-node (n2n) baseline is implemented by running the benchmark pods with `hostNetwork: true` set in the spec, so their traffic bypasses the CNI entirely.
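For illustration, a host-network pod spec looks roughly like this (the name and the iperf3 image are assumptions for the sketch, not the manifests knb actually generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: n2n-server-example   # illustrative name only
spec:
  hostNetwork: true          # pod shares the node's network namespace, bypassing the CNI
  containers:
    - name: iperf3
      image: networkstatic/iperf3  # assumed benchmark image for this sketch
      args: ["-s"]                 # run iperf3 in server mode
```

With `hostNetwork: true`, the pod binds directly to the node's interfaces, so measurements against it reflect raw node-to-node performance.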